Target Propagation
Author Feedback
Overall, we argue that stability enforcement (through dynamics and/or plasticity) seems reasonable given current neuroscientific theories (e.g., Zenke, Ganguli & Gerstner 2017, "The temporal paradox of Hebbian learning and homeostatic plasticity"). Lastly, R3 notes that the pre-activation variable a_l does not appear in the update; this can be readily explained.
A Theoretical Framework for Target Propagation
The success of deep learning, a brain-inspired form of AI, has sparked interest in understanding how the brain could similarly learn across multiple layers of neurons. However, the majority of biologically plausible learning algorithms have not yet reached the performance of backpropagation (BP), nor are they built on strong theoretical foundations. Here, we analyze target propagation (TP), a popular but not yet fully understood alternative to BP, from the standpoint of mathematical optimization. Our theory shows that TP is closely related to Gauss-Newton optimization and thus substantially differs from BP. Furthermore, our analysis reveals a fundamental limitation of difference target propagation (DTP), a well-known variant of TP, in the realistic scenario of non-invertible neural networks. We provide a first solution to this problem through a novel reconstruction loss that improves feedback weight training, while simultaneously introducing architectural flexibility by allowing for direct feedback connections from the output to each hidden layer. Our theory is corroborated by experimental results that show significant improvements in performance and in the alignment of forward weight updates with loss gradients, compared to DTP.
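To make the mechanism concrete, here is a minimal numpy sketch of the standard DTP target computation and local weight updates. It is illustrative only: the layer sizes, tanh activations, and fixed random feedback weights Q are assumptions for the sketch, not the paper's setup, and the direct-feedback variant proposed above is not included.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative network: forward weights W[l]; feedback weights Q[l] play the
# role of the learned approximate inverses g_l at the hidden layers.
sizes = [4, 6, 5, 3]
W = [rng.normal(0.0, 0.5, (sizes[l + 1], sizes[l])) for l in range(3)]
Q = {l: rng.normal(0.0, 0.5, (sizes[l], sizes[l + 1])) for l in (1, 2)}

def forward(x):
    hs = [x]
    for Wl in W:
        hs.append(np.tanh(Wl @ hs[-1]))
    return hs

x = rng.normal(size=sizes[0])
y = np.array([1.0, 0.0, 0.0])            # one-hot label
hs = forward(x)

# Output target: a small gradient step on the output loss (squared error here).
beta = 0.1
targets = [None] * len(hs)
targets[-1] = hs[-1] - beta * (hs[-1] - y)

# DTP targets, propagated downward with the difference correction
#   h_hat_l = g_l(h_hat_{l+1}) + h_l - g_l(h_{l+1}),
# which cancels the systematic error of the imperfect learned inverse g_l.
for l in (2, 1):
    targets[l] = np.tanh(Q[l] @ targets[l + 1]) + hs[l] - np.tanh(Q[l] @ hs[l + 1])

# Each layer minimizes its local loss ||h_l - h_hat_l||^2; a delta-rule step
# on W[l-1] (with tanh'(a) written as 1 - h^2) serves as the update here.
lr = 0.01
for l in range(1, len(hs)):
    e = (hs[l] - targets[l]) * (1.0 - hs[l] ** 2)
    W[l - 1] -= lr * np.outer(e, hs[l - 1])
```

Note that no global error signal is backpropagated: each layer only sees its own activity and its target, which is what makes the scheme a candidate for biologically plausible credit assignment.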
GAIT-prop: A biologically plausible learning rule derived from backpropagation of error
Traditional backpropagation of error, though a highly successful algorithm for learning in artificial neural network models, includes features that are biologically implausible for learning in real neural circuits. An alternative called target propagation proposes to resolve this implausibility by using a top-down model of neural activity to convert an error at the output of a neural network into layer-wise, plausible 'targets' for every unit. These targets can then be used to produce weight updates for network training. However, target propagation has so far been proposed heuristically, without a demonstrated equivalence to backpropagation. Here, we derive an exact correspondence between backpropagation and a modified form of target propagation (GAIT-prop), in which the target is a small perturbation of the forward pass. Specifically, backpropagation and GAIT-prop give identical updates when synaptic weight matrices are orthogonal. In a series of simple computer vision experiments, we show near-identical performance between backpropagation and GAIT-prop with a soft orthogonality-inducing regularizer.
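The claimed correspondence is easiest to verify in the linear, square, orthogonal case. The numpy sketch below is an illustration under those assumptions, not the authors' code: it checks that the target-propagation update equals the backpropagation gradient when the inverse of an orthogonal weight matrix is taken as its transpose, and includes the kind of soft orthogonality penalty the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_orthogonal(n):
    # QR decomposition of a Gaussian matrix yields an orthogonal Q.
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return q

n = 5
W1, W2 = random_orthogonal(n), random_orthogonal(n)
x = rng.normal(size=n)
y = rng.normal(size=n)                    # regression target

# Forward pass (linear layers, to keep the equivalence exact).
h1 = W1 @ x
h2 = W2 @ h1
e = h2 - y                                # dL/dh2 for L = 0.5 * ||h2 - y||^2

# Backpropagation gradient for W1.
grad_bp = np.outer(W2.T @ e, x)

# Target propagation with a small output perturbation (GAIT-prop style):
# since W2 is orthogonal, its exact inverse is its transpose.
beta = 1e-3
h2_hat = h2 - beta * e                    # perturbed output target
h1_hat = W2.T @ h2_hat                    # propagate the target through the inverse
# Gradient of the local loss 0.5 * ||h1 - h1_hat||^2 w.r.t. W1, rescaled by 1/beta.
grad_tp = np.outer(h1 - h1_hat, x) / beta

print(np.allclose(grad_bp, grad_tp))      # True: identical updates

# A soft penalty of the kind used to keep trained weights near orthogonality:
def orth_penalty(W):
    return np.sum((W.T @ W - np.eye(W.shape[0])) ** 2)
```

With nonlinear layers the paper's derivation additionally rescales targets by local activation derivatives, but the orthogonality requirement, and hence the role of the regularizer, is the same.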
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Florida > Palm Beach County > Boca Raton (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Asia > China > Jiangsu Province > Nanjing (0.04)
- Energy > Oil & Gas (0.47)
- Education (0.46)
- Health & Medicine (0.46)
- Information Technology (0.46)
- North America > United States > Texas (0.14)
- North America > Canada > Ontario > Toronto (0.14)
- Asia > Middle East (0.14)